Book Review: Diana C. Mutz, In-Your-Face Politics: The Consequences of Uncivil Media
In: Political studies review, Vol. 16, Issue 1, pp. NP76-NP76
ISSN: 1478-9302
In: Political analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Vol. 27, Issue 1, pp. 101-106
ISSN: 1476-4989
In: Political science research and methods: PSRM, pp. 1-1
ISSN: 2049-8489
In: Political science research and methods: PSRM, pp. 1-8
ISSN: 2049-8489
Abstract
Hyperparameters critically influence how well machine learning models perform on unseen, out-of-sample data. Systematically comparing the performance of different hyperparameter settings will often go a long way in building confidence about a model's performance. However, analyzing 64 machine-learning-related manuscripts published in three leading political science journals (APSR, PA, and PSRM) between 2016 and 2021, we find that only 13 publications (20.31 percent) report the hyperparameters and also how they tuned them in either the paper or the appendix. We illustrate the dangers of cursory attention to model and tuning transparency by comparing machine learning models' capability to predict electoral violence from tweets. The tuning of hyperparameters and their documentation should become a standard component of robustness checks for machine learning models.
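The systematic comparison the abstract calls for can be sketched as a plain grid search that logs every setting it tries, so that both the winning configuration and the search itself can be reported. The scoring function and parameter grid below are invented for illustration; they are not the paper's models or data.

```python
import itertools

def grid_search(grid, evaluate):
    """Score every hyperparameter combination and keep a full log,
    so the chosen setting and the whole search can be documented."""
    log = []
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        log.append((params, evaluate(params)))
    best_params, best_score = max(log, key=lambda entry: entry[1])
    return best_params, log

# Illustrative stand-in for a model's validation score: peaks at
# learning_rate = 0.1 and depth = 3, so the search should find that.
def toy_validation_score(params):
    return -(params["learning_rate"] - 0.1) ** 2 - (params["depth"] - 3) ** 2

grid = {"learning_rate": [0.01, 0.1, 1.0], "depth": [1, 3, 5]}
best, log = grid_search(grid, toy_validation_score)
# best == {"learning_rate": 0.1, "depth": 3}; log records all 9 settings
```

Reporting the full `log` alongside `best`, rather than the winner alone, is the kind of tuning transparency the abstract argues for.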
In: Politische Vierteljahresschrift: PVS: German political science quarterly, Vol. 61, Issue 1, pp. 111-130
ISSN: 1862-2860
Abstract
Nearly half of the seats in the Bundestag are allocated via direct election in the constituencies. Most election forecasting models, however, leave this unaccounted for. In this article, we present an approach for predicting first-vote shares in constituencies for Bundestag elections. To this end, we combine the second-vote forecasting model of zweitstimme.org with two first-vote models, a linear regression and an artificial neural network, which use candidate and constituency characteristics for prediction. For our approach, all data used are publicly available before the respective election and can therefore be used for a true ex ante forecast. The model can thus provide valuable information for candidates and the interested public in future elections. The forecasts are also relevant for explanatory research: the resulting win probabilities allow the construction of better measurement instruments for characterizing the competitiveness of a constituency and the expected closeness of the constituency race, both of which can influence political behavior. In addition, the forecast permits empirical statements about the expected size of the Bundestag and its personnel composition.
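The win probabilities the abstract mentions can be read off forecast simulations by counting how often each candidate leads their constituency race. The simulation draws below are made-up numbers for illustration, not output of the zweitstimme.org model.

```python
def win_probabilities(draws):
    """draws: one list of candidate first-vote shares per simulation run.
    Returns the fraction of simulations in which each candidate leads."""
    wins = [0] * len(draws[0])
    for shares in draws:
        wins[shares.index(max(shares))] += 1
    return [w / len(draws) for w in wins]

# Four illustrative simulation runs for three candidates.
draws = [
    [0.34, 0.33, 0.33],
    [0.31, 0.36, 0.33],
    [0.35, 0.30, 0.35],  # exact tie: index() credits the first leader
    [0.40, 0.30, 0.30],
]
print(win_probabilities(draws))  # [0.75, 0.25, 0.0]
```

Probabilities near 0.5 indicate a close, competitive race; values near 0 or 1 indicate a safe seat, which is what makes these quantities useful as competitiveness measures.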
In: Politische Vierteljahresschrift: PVS: German political science quarterly, Vol. 58, Issue 3, pp. 418-441
ISSN: 1862-2860
In: PS: political science & politics, Vol. 55, Issue 1, pp. 85-90
ISSN: 1537-5935
In: Political analysis: PA; the official journal of the Society for Political Methodology and the Political Methodology Section of the American Political Science Association, Vol. 27, Issue 2, pp. 255-262
ISSN: 1476-4989
We offer a dynamic Bayesian forecasting model for multiparty elections. It combines data from published pre-election public opinion polls with information from fundamentals-based forecasting models. The model accounts for the multiparty nature of the setting and allows statements about other quantities of interest, such as the probability of a plurality of votes for a party or of a majority for certain coalitions in parliament. We present results from two ex ante forecasts of elections that took place in 2017 and are able to show that the model outperforms fundamentals-based forecasting models in terms of accuracy and the calibration of uncertainty. Provided that historical and current polling data are available, the model can be applied to any multiparty setting.
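The simplest way to combine a fundamentals-based forecast with polling data is a conjugate Dirichlet-multinomial update, sketched below. This is a minimal stand-in, not the authors' dynamic model: the fundamentals forecast acts as a Dirichlet prior with a pseudo-sample size, and one poll's raw counts update it. All numbers are invented for illustration.

```python
def posterior_shares(fundamentals, prior_strength, poll_counts):
    """Treat the fundamentals vote-share forecast as a Dirichlet prior
    with pseudo-sample size prior_strength, update with one poll's
    respondent counts, and return posterior mean vote shares."""
    alpha = [share * prior_strength + n
             for share, n in zip(fundamentals, poll_counts)]
    total = sum(alpha)
    return [a / total for a in alpha]

# Illustrative three-party setting: fundamentals say 40/35/25 percent,
# and a 1,000-respondent poll splits 500/300/200.
shares = posterior_shares([0.40, 0.35, 0.25], 1000, [500, 300, 200])
# shares is approximately [0.45, 0.325, 0.225]
```

Raising `prior_strength` pulls the posterior toward the fundamentals forecast; adding more polls (larger counts) pulls it toward the polling data, which captures the basic trade-off such combined models manage.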
In: Proceedings of the National Academy of Sciences of the United States of America (PNAS), Vol. 119, Issue 44, pp. 1-8
This study explores how researchers' analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical starting conditions. Researchers' expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team's workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers' results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.